Whitepaper – Practical Attacks On Machine Learning Systems
Written by Chris Anley, Chief Scientist, NCC Group. This paper collects a set of notes and research projects conducted by NCC Group on the topic of the security of Machine Learning (ML) systems. The objective is to provide some industry perspective to the academic community, while collating helpful references for security practitioners, to enable more effective security auditing and security-focused code review of ML systems. Details of specific practical attacks and common security problems are described. Some general background information on the broader subject of ML is also included, mostly for context, to ensure that explanations of attack scenarios are clear, and some notes on frameworks and development processes are provided.
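To make the attack classes such a paper surveys more concrete, here is a minimal sketch of one well-known evasion attack, the Fast Gradient Sign Method (FGSM). It illustrates the general technique only; the PyTorch model, input x, and label y are assumptions, not code from the whitepaper.

```python
# Minimal sketch of a white-box evasion attack (FGSM, Goodfellow et al.).
# The model, input x, and label y are assumed for illustration.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return an adversarial copy of x that the model is likely to misclassify."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step each input value in the direction that most increases the loss.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep values in a valid input range
```

An auditor reviewing an ML system can run a loop like this against a candidate model to estimate how fragile its decision boundary is under small input perturbations.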
AI Accountability Framework Created to Guide Use of AI in Security
Europol has announced the development of a new AI accountability framework designed to guide the use of artificial intelligence (AI) tools by security practitioners. The move represents a major milestone in the Accountability Principles for Artificial Intelligence (AP4AI) project, which aims to create a practical toolkit that can directly support AI accountability when used in the internal security domain. The "world-first" framework was developed in consultation with experts from 28 countries, representing law enforcement officials, lawyers and prosecutors, data protection and fundamental rights experts, as well as technical and industry experts. The initiative began in 2021 amid growing interest in and use of AI in security, both by internal cybersecurity teams and by law enforcement agencies tackling cybercrime and other offenses. Research conducted by the AP4AI project demonstrated significant public support for this approach: in a survey of more than 5,500 citizens across 30 countries, 87% of respondents agreed or strongly agreed that AI should be used to protect children and vulnerable groups and to investigate criminals and criminal organizations.
A security practitioner's roadmap to artificial intelligence
Artificial Intelligence applies deep learning and other algorithmic techniques to solve real-world problems. The sub-field of Vision Intelligence impacts the physical security industry directly when you consider that surveillance cameras are the ultimate end-point device, the "all-seeing eyes" of the Internet. Could "intelligence" be applied to video to create "human context" and eliminate the false-alarm and tailgating pain points that have long plagued the physical security industry? As a security leader, Philip Jang, Sr. Manager of Physical Systems & Technology at VMware, has been exploring how AI technologies and digital transformation processes impact the physical security profession for several years. Jang has global enterprise experience in the planning, execution, delivery and automation of advanced technologies such as AI, robotics, drones, advanced analytics and IoT.
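As one concrete reading of the "human context" idea, the sketch below flags possible tailgating by comparing how many people a camera's vision model counts at a door against how many badges were swiped. The AccessEvent structure and the detector supplying people_detected are hypothetical stand-ins, not a specific vendor's API.

```python
# Illustrative sketch: flag tailgating by comparing badge swipes to the
# person count reported by a (hypothetical) camera vision model.
from dataclasses import dataclass

@dataclass
class AccessEvent:
    badge_swipes: int      # credentials presented at the door
    people_detected: int   # persons counted by the camera's vision model

def is_tailgating(event: AccessEvent) -> bool:
    # More bodies through the door than badges swiped suggests a tailgate.
    return event.people_detected > event.badge_swipes

# Hypothetical events: the second has two people on one badge swipe.
events = [AccessEvent(badge_swipes=1, people_detected=1),
          AccessEvent(badge_swipes=1, people_detected=2)]
alerts = [e for e in events if is_tailgating(e)]
print(f"{len(alerts)} possible tailgating event(s)")  # prints: 1 possible tailgating event(s)
```

The point of the comparison is that it alarms only when human context (bodies versus badges) disagrees, rather than on raw motion, which is where conventional video analytics generate their false alarms.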
Reducing the Risks Posed by Artificial Intelligence
Artificial Intelligence (AI) is creating a new frontier in information security. Systems that independently learn, reason and act will increasingly replicate human behavior. Like humans, they will be flawed, but also capable of achieving great things. AI poses new information risks and makes some existing ones more dangerous. However, it can also be used for good and should become a key part of every organization's defensive arsenal. Business and information security leaders alike must understand both the risks and opportunities before embracing technologies that will soon become a critically important part of everyday business.
The AI Manifesto
We live in a time of rapid technological change, where nearly every aspect of our lives now relies on devices that compute and connect. The resulting exponential increase in the use of cyber-physical systems has transformed industry, government, and commerce; what's more, the speed of innovation shows no signs of slowing down, particularly as the revolution in artificial intelligence (AI) stands to transform daily life even further through increasingly powerful tools for data analysis, prediction, security, and automation [1]. Like past waves of extreme innovation, as this one crests, debates over ethical usage and privacy controls are likely to proliferate. So far, the intersection of AI and society has brought its own unique set of ethical challenges, some of which have been anticipated and discussed for many years, while others are just beginning to come to light. For example, academics and science fiction authors alike have long pondered the ethical implications of hyper-intelligent machines, but it's only recently that we've seen real-world problems start to surface, like social bias in automated decision-making tools, or the ethical choices made by self-driving cars [2, 5]. During the past two decades, the security community has increasingly turned to AI and the power of machine learning (ML) to reap many technological benefits, but those advances have forced security practitioners to navigate a proportional number of risks and ethical dilemmas along the way. As the leader in the development of AI and ML for cybersecurity, BlackBerry Cylance is at the heart of the debate and is passionate about advancing the use of AI for good.
Anomaly Detection with Unsupervised AI in MixMode: Why Threat Intel Alone is Not Enough - MixMode
Historically, the MixMode platform has provided its users with a forensic hunting platform built on intel-based indicators and security events from public and proprietary sources. While these detections still have their place in the security ecosystem, the increase in state-sponsored attacks, insider threats and adversarial artificial intelligence means there are simply too many threats to your network to rely solely on intelligence-based detections or proactive hunting. Many of these threats are sophisticated enough to evade traditional threat detection, and in the case of zero-day threats, signature-based detection may not even be possible. In the face of this growing threat landscape, the best defense is to supplement these traditional methods with anomaly detection, a term that is quickly becoming genericized as it is bandied about within the industry. Here we will discuss some of the opportunities and challenges that can arise with anomaly detection, as well as MixMode's unique approach to the solution.
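To ground the idea of signature-free detection, here is a minimal sketch of unsupervised anomaly detection on network flow features using scikit-learn's IsolationForest. The synthetic features and thresholds are assumptions for illustration; this is not MixMode's proprietary algorithm.

```python
# Minimal sketch of unsupervised anomaly detection on network flows.
# Feature values are synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Features per flow: [bytes_sent, bytes_received, duration_seconds]
normal = rng.normal(loc=[5e3, 8e3, 30], scale=[1e3, 2e3, 10], size=(1000, 3))
# A handful of exfiltration-like flows: large uploads, long duration.
odd = rng.normal(loc=[5e5, 1e3, 600], scale=[1e5, 5e2, 60], size=(5, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)            # learn what "normal" traffic looks like
labels = model.predict(odd)  # -1 = anomaly, 1 = normal
print(labels)                # expected: mostly -1
```

Because the model learns a baseline from observed traffic rather than matching known indicators, it can flag a zero-day exfiltration channel that no signature or threat-intel feed has ever described, which is exactly the gap the article argues anomaly detection must fill.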
Should we give AI the key to our security? - Access AI
The cyber security industry is a good example of a field where artificial intelligence (AI) is simultaneously looked to as a near-magical perfect solution and already deployed in practical ways every day. But can we trust it? The cyber world is notoriously unbalanced: hostile attackers have their pick of thousands of vulnerabilities to launch their strikes, along with an ever-increasing arsenal of tools to evade detection once they have breached a system. While attackers only have to be successful once, the security teams tasked with defending a system have to stop every attack, every time. The inhuman speed and power of an advanced AI could tip these scales at last, levelling the playing field for security practitioners who are constantly on the back foot.